




Black-Box Optimization with Local Generative Surrogates (Supplementary Material): A Surrogates Implementation Details, A.1 GAN Implementation

Neural Information Processing Systems

The original version of FFJORD does not support conditional input. To address this, we rewrote one of the base layers used in the FFJORD library. An example of such statistics is presented in Fig. 6. Figure 6: An example of monitored statistics during surrogate training for one iteration of optimization. The gradient bias is calculated per component of the gradient vector.

C.1 Procedure for Mixing Matrix Generation. A 10-dimensional mixing matrix A could be generated with the following Python code.

C.3 Numerical Derivatives. To obtain numerical derivatives of R, we use the central difference scheme: ∂R/∂ψ_i ≈ (R(ψ + h e_i) − R(ψ − h e_i)) / (2h), where e_i is the i-th unit vector and h is the step size.

Muons are bent by the magnetic field and simultaneously experience stochastic scattering as they pass through the magnet, which causes random variations in their trajectories. Color represents the number of hits in a bin.
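The mixing-matrix snippet referenced in C.1 is not reproduced in this excerpt. A minimal sketch of one plausible construction, assuming A is simply a random, well-scaled 10×10 matrix (the paper's exact recipe is not shown here, and the normalization choice is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical construction: draw a random 10x10 mixing matrix and
# normalize its columns to unit length so the mixture is well scaled.
A = rng.normal(size=(10, 10))
A /= np.linalg.norm(A, axis=0, keepdims=True)
```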


Learning to Optimize for Mixed-Integer Non-linear Programming

Tang, Bo, Khalil, Elias B., Drgoňa, Ján

arXiv.org Artificial Intelligence

Mixed-integer non-linear programs (MINLPs) arise in various domains, such as energy systems and transportation, but are notoriously difficult to solve. Recent advances in machine learning have led to remarkable successes in optimization tasks, an area broadly known as learning to optimize. This approach includes using predictive models to generate solutions for optimization problems with continuous decision variables, thereby avoiding the need for computationally expensive optimization algorithms. However, applying learning to MINLPs remains challenging primarily due to the presence of integer decision variables, which complicate gradient-based learning. To address this limitation, we propose two differentiable correction layers that generate integer outputs while preserving gradient information. Combined with a soft penalty for constraint violation, our framework can tackle both the integrality and non-linear constraints in a MINLP. Experiments on three problem classes with convex/non-convex objective/constraints and integer/mixed-integer variables show that the proposed learning-based approach consistently produces high-quality solutions for parametric MINLPs extremely quickly. As problem size increases, traditional exact solvers and heuristic methods struggle to find feasible solutions, whereas our approach continues to deliver reliable results. Our work extends the scope of learning-to-optimize to MINLP, paving the way for integrating integer constraints into deep learning models. Our code is available at https://github.com/pnnl/L2O-pMINLP.
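The abstract's "differentiable correction layers that generate integer outputs while preserving gradient information" can be illustrated with a straight-through rounding sketch. This is an assumption for illustration only, not the paper's actual layers (those are in the linked repository); the forward pass rounds a relaxed prediction to integers, while the backward pass treats rounding as the identity:

```python
import numpy as np

def ste_round(x):
    # Forward pass: project the relaxed continuous prediction
    # onto the nearest integers.
    return np.rint(x)

def ste_round_vjp(grad_out):
    # Backward pass (straight-through estimator): pretend rounding
    # is the identity, so the upstream gradient passes unchanged.
    return grad_out

x = np.array([0.2, 1.7, 3.4])   # relaxed model output
y = ste_round(x)                # integer-valued forward output
g = ste_round_vjp(np.ones_like(x))  # gradient flows straight through
```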


Differentiating the Black-Box: Optimization with Local Generative Surrogates

Shirobokov, Sergey, Belavin, Vladislav, Kagan, Michael, Ustyuzhanin, Andrey, Baydin, Atılım Güneş

arXiv.org Machine Learning

We propose a novel method for gradient-based optimization of black-box simulators using differentiable local surrogate models. In fields such as physics and engineering, many processes are modeled with non-differentiable simulators with intractable likelihoods. Optimization of these forward models is particularly challenging, especially when the simulator is stochastic. To address such cases, we introduce the use of deep generative models to iteratively approximate the simulator in local neighborhoods of the parameter space. We demonstrate that these local surrogates can be used to approximate the gradient of the simulator, and thus enable gradient-based optimization of simulator parameters. In cases where the dependence of the simulator on the parameter space is constrained to a low dimensional submanifold, we observe that our method attains minima faster than all baseline methods, including Bayesian optimization, numerical optimization, and REINFORCE-driven approaches.
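The outer loop the abstract describes (sample the simulator locally, fit a differentiable surrogate, differentiate it, step the parameters) can be sketched as follows. This is a schematic only: the noisy quadratic `simulator`, the neighborhood width, and the degree-2 polynomial surrogate are all stand-in assumptions; the paper uses deep generative surrogates over a stochastic black-box simulator.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulator(psi):
    # Stand-in stochastic black box: a noisy quadratic whose
    # minimum sits at psi = 3.0 (assumption for illustration).
    return (psi - 3.0) ** 2 + 0.1 * rng.normal()

psi, lr, eps = 0.0, 0.1, 0.5
for _ in range(200):
    # 1) Sample the simulator in a local neighborhood of psi.
    ps = psi + rng.uniform(-eps, eps, size=32)
    ys = np.array([simulator(p) for p in ps])
    # 2) Fit a differentiable local surrogate (here a degree-2
    #    polynomial, standing in for the deep generative model).
    coeffs = np.polyfit(ps, ys, deg=2)
    # 3) Differentiate the surrogate at psi and take a gradient step.
    grad = np.polyval(np.polyder(coeffs), psi)
    psi -= lr * grad
```

With these toy settings the iterate drifts toward the simulator's minimum even though the simulator itself is never differentiated.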